
    RoboJam: A Musical Mixture Density Network for Collaborative Touchscreen Interaction

    RoboJam is a machine-learning system for generating music that assists users of a touchscreen music app by performing responses to their short improvisations. This system uses a recurrent artificial neural network to generate sequences of touchscreen interactions and absolute timings, rather than high-level musical notes. To accomplish this, RoboJam's network uses a mixture density layer to predict appropriate touch interaction locations in space and time. In this paper, we describe the design and implementation of RoboJam's network and how it has been integrated into a touchscreen music app. A preliminary evaluation analyses the system in terms of training, musical generation and user interaction.
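
    The mixture density layer described above can be sketched briefly. The following is a minimal illustration rather than RoboJam's actual code: it assumes K diagonal Gaussian components over a 3-D touch event (x, y, time-delta), and the hidden size and random weights are purely illustrative.

    import numpy as np

    K, D = 5, 3  # K mixture components; D = (x, y, time-delta)
    rng = np.random.default_rng(0)

    def mdn_sample(h, W_pi, W_mu, W_sigma):
        """Sample one touch event from the mixture defined by RNN state h."""
        logits = h @ W_pi                          # (K,) unnormalised mixture weights
        pi = np.exp(logits - logits.max())
        pi /= pi.sum()                             # softmax -> component probabilities
        mu = (h @ W_mu).reshape(K, D)              # component means
        sigma = np.exp(h @ W_sigma).reshape(K, D)  # positive standard deviations
        k = rng.choice(K, p=pi)                    # pick a component...
        return rng.normal(mu[k], sigma[k])         # ...then sample (x, y, dt) from it

    H = 32  # hidden state size (illustrative)
    h = rng.standard_normal(H)
    W_pi = 0.1 * rng.standard_normal((H, K))
    W_mu = 0.1 * rng.standard_normal((H, K * D))
    W_sigma = 0.1 * rng.standard_normal((H, K * D))
    x, y, dt = mdn_sample(h, W_pi, W_mu, W_sigma)

    Training would fit these weights by maximising the mixture's likelihood of recorded touch sequences; only the sampling step is shown here.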

    A Standardised Procedure for Evaluating Creative Systems: Computational Creativity Evaluation Based on What it is to be Creative

    Computational creativity is a flourishing research area, with a variety of creative systems being produced and developed. Creativity evaluation, however, has not kept pace with system development: there is an evident lack of systematic evaluation of the creativity of these systems in the literature. This is partially due to difficulties in defining what it means for a computer to be creative; indeed, there is no consensus on this for human creativity, let alone its computational equivalent. This paper proposes a Standardised Procedure for Evaluating Creative Systems (SPECS). SPECS is a three-step process: stating what it means for a particular computational system to be creative, deriving tests based on these statements, and performing those tests. To assist this process, the paper offers a collection of key components of creativity, identified empirically from discussions of human and computational creativity. Using this approach, the SPECS methodology is demonstrated through a comparative case study evaluating computational creativity systems that improvise music.
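
    The three SPECS steps can be pictured as a small evaluation harness. The sketch below is one interpretation, not code from the paper; the "novelty" criterion, its test, and the StubImproviser method are hypothetical placeholders.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class CreativityCriterion:
        name: str                        # Step 1: a statement of what creativity means here
        test: Callable[[object], float]  # Step 2: a measurable test derived from it

    def evaluate(system, criteria):
        """Step 3: perform every derived test and report per-criterion scores."""
        return {c.name: c.test(system) for c in criteria}

    # Hypothetical criterion for a music improviser (placeholder, not from the paper):
    novelty = CreativityCriterion(
        name="novelty",
        test=lambda s: s.fraction_of_unseen_phrases(),  # assumed method on the system under test
    )

    class StubImproviser:
        def fraction_of_unseen_phrases(self):
            return 0.6  # placeholder score for demonstration

    print(evaluate(StubImproviser(), [novelty]))  # {'novelty': 0.6}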

    Composing first species counterpoint with a variable neighbourhood search algorithm

    In this article, a variable neighbourhood search (VNS) algorithm is developed that can generate musical fragments consisting of a cantus firmus melody and a first species counterpoint. The objective function of the algorithm is based on a quantification of existing rules for counterpoint. The VNS algorithm developed in this article is a local search algorithm that starts from a randomly generated melody and improves it by changing one or two notes at a time. A thorough parametric analysis of the VNS reveals how the algorithm's parameters affect the quality of the composed fragment and identifies their optimal settings. A comparison with a genetic algorithm developed for the same task shows that the VNS is more efficient. The VNS algorithm has been implemented in a user-friendly software environment for composition, called Optimuse. Optimuse allows a user to specify a number of characteristics such as length, key and mode. Based on this information, Optimuse 'composes' both a cantus firmus and a first species counterpoint. Alternatively, the user may specify a cantus firmus and let Optimuse compose the accompanying first species counterpoint. © 2012 Taylor & Francis.
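
    The local-search loop described above is easy to sketch. The fragment below is a toy stand-in, not Optimuse: the objective merely penalises large melodic leaps in place of the paper's quantified counterpoint rules, and the two neighbourhoods are "change one note" and "change two notes".

    import random

    random.seed(0)
    PITCHES = range(60, 73)  # C4..C5 as MIDI note numbers

    def cost(melody):
        """Toy objective: count melodic leaps larger than a fourth."""
        return sum(1 for a, b in zip(melody, melody[1:]) if abs(a - b) > 5)

    def neighbour(melody, k):
        """Resample k randomly chosen notes (the k-th neighbourhood)."""
        out = list(melody)
        for i in random.sample(range(len(out)), k):
            out[i] = random.choice(PITCHES)
        return out

    def vns(length=16, iters=2000):
        best = [random.choice(PITCHES) for _ in range(length)]
        k = 1
        for _ in range(iters):
            cand = neighbour(best, k)
            if cost(cand) < cost(best):
                best, k = cand, 1       # improvement: return to the first neighbourhood
            else:
                k = 2 if k == 1 else 1  # stuck: switch neighbourhood
        return best

    print(cost(vns()))  # typically 0 for this easy toy objective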

    Biodiversity Trends along the Western European Margin


    Splicing-Inspired Recognition and Composition of Musical Collectives Styles

    Computer music is an emerging area for the application of computational techniques inspired by information processing in Nature. A challenging task in this area is the automatic recognition of musical styles. The style of a musician is the result of the combination of several factors, such as experience, personality and preferences. In recent years, several works have addressed style recognition for soloist performers, where improvisation often plays an important role. A harder variant of this problem is recognising the style of multiple performers who collaborate over time to perform, record or compose music, known as a musical collective; it presents many more difficulties owing to the simultaneous presence of several performers who mutually influence one another. In this paper, we propose a new approach for both the recognition and the automatic composition of styles for musical collectives. Specifically, our system exploits a machine-learning recognizer, based on one-class support vector machines and neural networks, for style recognition, and a splicing composer for music composition (in the style of the whole collective). To assess the effectiveness of our system we performed several tests using transcriptions of popular jazz bands. With regard to recognition, we show that our classifier achieves an accuracy of 97.7%. With regard to composition, we measured the quality of the generated compositions by collecting subjective perceptions from domain experts.
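
    The one-class recognition stage can be sketched with an off-the-shelf classifier. The snippet below assumes each piece has already been reduced to a fixed-length feature vector (for instance an interval histogram); the random vectors, feature choice and parameters are illustrative, and the paper's neural-network stage is omitted.

    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    band_pieces = rng.normal(0.0, 1.0, size=(40, 12))   # stand-in features for one collective
    other_pieces = rng.normal(3.0, 1.0, size=(10, 12))  # stand-in features for other bands

    clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
    clf.fit(band_pieces)  # learn the region of feature space occupied by the collective's style

    print(clf.predict(band_pieces[:5]))   # +1 = recognised as in-style
    print(clf.predict(other_pieces[:5]))  # -1 = flagged as out-of-style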

    Interactive, Evolutionary Textured Sound Composition

    We describe a system that maps the interaction between two people to control a genetic process for generating music. We start with a population of melodies encoded genetically. This population is allowed to breed every biological cycle, creating new members of the population based upon the semantics of the spatial relationship between two people moving in a large physical space. A pre-specified hidden melody is used to select a melody from the population to play every musical cycle. The overlapping of selected melodies provides an intriguing textured musical space.
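
    The breed-and-select cycle described above can be sketched compactly. The toy below is an interpretation, not the authors' system: pitch distance to the hidden melody stands in for its selection rule, and the mapping from the two participants' spatial relationship to breeding is reduced to a comment.

    import random

    random.seed(1)
    LENGTH, POP = 8, 20
    PITCHES = range(60, 72)

    def random_melody():
        return [random.choice(PITCHES) for _ in range(LENGTH)]

    def similarity(a, b):
        """Higher is closer: negative summed pitch distance."""
        return -sum(abs(x - y) for x, y in zip(a, b))

    def breed(a, b):
        """One-point crossover plus a single random mutation."""
        cut = random.randrange(1, LENGTH)
        child = a[:cut] + b[cut:]
        child[random.randrange(LENGTH)] = random.choice(PITCHES)
        return child

    population = [random_melody() for _ in range(POP)]
    hidden = random_melody()

    # Biological cycle: breed new members (in the real system, parent choice
    # would be driven by the spatial relationship between the two people).
    population.append(breed(random.choice(population), random.choice(population)))

    # Musical cycle: the hidden melody selects which member is played.
    print(max(population, key=lambda m: similarity(m, hidden)))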